431 Class 21

Thomas E. Love, Ph.D.

2024-11-12

Today’s Agenda

  1. Reviewing (and Fixing) What We Have Done So Far
  2. Considering Bayesian alternative fits with weakly informative priors
    • What must we do differently (Bayes vs. OLS)?
  3. Assessing the candidate models more thoroughly, in both the training and test samples
    • MAPE, RMSPE, Maximum Prediction Error, Validated \(R^2\)
  4. Incorporating multiple imputation in building a final model

431 strategy: “most useful” model?

  1. Split the data into a development (model training) sample of about 70-80% of the observations, and a holdout (model test) sample, containing the remaining observations.
  2. Develop candidate models using the development sample.
  3. Assess the quality of fit for candidate models within the development sample.
  4. Check adherence to regression assumptions in the development sample.

431 strategy: “most useful” model?

  5. When you have candidate models, assess them based on the accuracy of the predictions they make for the held-out data (which were not used in building the models).
  6. Select a “final” model for use based on the evidence in steps 3, 4, and especially 5.

R Packages and Data Load

knitr::opts_chunk$set(comment = NA)
library(janitor)
library(mice)
library(naniar)
library(patchwork)
library(car)         ## for vif function as well as Box-Cox
library(GGally)      ## for ggpairs scatterplot matrix
library(broom)       ## for predictions, residuals with augment
library(rstanarm)    ## fitting Bayesian regressions
library(gt)          ## some prettier tables
library(easystats)
library(tidyverse)

source("c21/data/Love-431.R")
theme_set(theme_bw())
dm500 <- readRDS("c21/data/dm500.Rds")

What We’ve Done So Far

Imputation and Partitioning

set.seed(20241031)

dm500_tenimps <- mice(dm500, m = 10, printFlag = FALSE)
dm500_i <- complete(dm500_tenimps, 7) |> tibble()

set.seed(4312024)
dm500_i_train <- dm500_i |> 
  slice_sample(prop = 0.7, replace = FALSE)
dm500_i_test <- 
  anti_join(dm500_i, dm500_i_train, by = "subject")
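As a sanity check, the training and test pieces should be disjoint and should together recover every subject exactly once. A minimal base-R sketch of that check on a toy data frame (the names here are hypothetical stand-ins for dm500_i):

```r
# Toy stand-in for dm500_i (hypothetical data)
toy <- data.frame(subject = sprintf("S-%03d", 1:10),
                  a1c = seq(5.5, 10, length.out = 10))

set.seed(431)
train_rows <- sample(nrow(toy), size = round(0.7 * nrow(toy)))
toy_train <- toy[train_rows, ]
toy_test  <- toy[-train_rows, ]   # same idea as the anti_join by subject

# disjoint pieces that jointly recover the full sample
length(intersect(toy_train$subject, toy_test$subject))            # 0
all(sort(c(toy_train$subject, toy_test$subject)) == toy$subject)  # TRUE
```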

Fixing My Mistake from Classes 19-20

Three Regression Models We’ve Fit

  • Using the model training sample, and a (100/a1c) outcome transformation.
  • My mistake: once you decide on a transformation, create the transformed variable in your data before fitting the models.
dm500_i_train <- dm500_i_train |> mutate(transa1c = 100/a1c)

fit1 <- lm(transa1c ~ a1c_old, data = dm500_i_train)

fit2 <- lm(transa1c ~ a1c_old + age, data = dm500_i_train)

fit3 <- lm(transa1c ~ a1c_old + age + income, 
            data = dm500_i_train)

Performance Indices for 3 Models

  • in the training sample
plot(compare_performance(fit1, fit2, fit3))

OLS Model fit1 Checking

check_model(fit1)

OLS Model fit2 Checking

check_model(fit2)

OLS Model fit3 Checking

check_model(fit3)

augment training samples

aug1 <- augment(fit1, data = dm500_i_train)
aug2 <- augment(fit2, data = dm500_i_train)
aug3 <- augment(fit3, data = dm500_i_train)

augment results for fit2 for the first six subjects in our training sample…

aug2 |> head() |> gt()
a1c a1c_old age income subject transa1c .fitted .resid .hat .sigma .cooksd .std.resid
11.6 5.6 54 Below_30K S-168 8.62069 15.38293 -6.7622361 0.007351110 2.276248 2.145934e-02 -2.9484254
6.5 6.4 68 Higher_than_50K S-359 15.38462 14.96010 0.4245154 0.008948617 2.305194 1.032820e-04 0.1852435
7.6 7.0 60 Between_30-50K S-421 13.15789 14.18858 -1.0306848 0.003869153 2.304640 2.605607e-04 -0.4486063
7.3 8.0 56 Between_30-50K S-037 13.69863 13.13196 0.5666705 0.002935108 2.305107 5.963655e-05 0.2465282
7.7 7.8 63 Between_30-50K S-030 12.98701 13.49557 -0.5085564 0.004906357 2.305146 8.060899e-05 -0.2214648
6.4 7.0 34 Below_30K S-055 15.62500 13.54996 2.0750381 0.020785932 2.302550 5.871383e-03 0.9109298
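As a quick arithmetic check on the first row above, the residual is just the transformed outcome minus the fitted value:

```r
# First row of aug2: subject S-168
a1c      <- 11.6
transa1c <- 100 / a1c           # 8.62069 (to rounding)
fitted   <- 15.38293            # .fitted, as displayed
resid    <- transa1c - fitted   # matches .resid = -6.7622 (to rounding)
round(resid, 4)
```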

Bayesian fits instead?

Refit with Bayesian models?

What must we change to use Bayesian (stan_glm) fits?

  1. Must create the transformed outcome in the data prior to fitting with stan_glm().
  2. Results are a bit different for model_parameters().
  3. Results are a bit different for model_performance().
  4. No real change for check_model().
  5. There is no augment() method for stan_glm() fits.

Refit with Bayesian models?

with default weakly informative priors

set.seed(20241112)

fit1B <- stan_glm(transa1c ~ a1c_old, 
                  data = dm500_i_train, refresh = 0)
fit2B <- stan_glm(transa1c ~ a1c_old + age, 
                  data = dm500_i_train, refresh = 0)
fit3B <- stan_glm(transa1c ~ a1c_old + age + income, 
            data = dm500_i_train, refresh = 0)

Estimating fit1 Coefficients?

  • Bayesian fit
model_parameters(fit1B, ci = 0.95) |> gt()
Parameter Median CI CI_low CI_high pd Rhat ESS Prior_Distribution Prior_Location Prior_Scale
(Intercept) 20.97661 0.95 19.910242 22.0855930 1 1.000731 4048.198 normal 13.36034 7.270914
a1c_old -0.98244 0.95 -1.118648 -0.8463795 1 1.000882 3928.896 normal 0.00000 4.028203
  • OLS fit
model_parameters(fit1, ci = 0.95) |> gt() |> fmt_number(decimals = 3)
Parameter Coefficient SE CI CI_low CI_high t df_error p
(Intercept) 20.968 0.545 0.950 19.896 22.039 38.490 348.000 0.000
a1c_old −0.982 0.068 0.950 −1.117 −0.847 −14.338 348.000 0.000

Estimating fit2 Coefficients?

model_parameters(fit2B, ci = 0.95) |> gt() |> fmt_number(decimals = 3)
Parameter Median CI CI_low CI_high pd Rhat ESS Prior_Distribution Prior_Location Prior_Scale
(Intercept) 19.406 0.950 17.316 21.372 1.000 1.001 4,749.509 normal 13.360 7.271
a1c_old −0.957 0.950 −1.087 −0.817 1.000 1.001 4,638.566 normal 0.000 4.028
age 0.025 0.950 −0.002 0.052 0.962 1.000 5,188.851 normal 0.000 0.794
model_parameters(fit2, ci = 0.95) |> gt() |> fmt_number(decimals = 3)
Parameter Coefficient SE CI CI_low CI_high t df_error p
(Intercept) 19.423 1.019 0.950 17.420 21.427 19.064 347.000 0.000
a1c_old −0.958 0.070 0.950 −1.095 −0.822 −13.786 347.000 0.000
age 0.025 0.014 0.950 −0.002 0.052 1.791 347.000 0.074

Estimating fit3 Coefficients?

model_parameters(fit3B, ci = 0.95) |> gt() |> fmt_number(decimals = 3)
Parameter Median CI CI_low CI_high pd Rhat ESS Prior_Distribution Prior_Location Prior_Scale
(Intercept) 19.487 0.950 17.457 21.626 1.000 1.000 3,032.152 normal 13.360 7.271
a1c_old −0.953 0.950 −1.088 −0.815 1.000 1.000 4,061.029 normal 0.000 4.028
age 0.023 0.950 −0.004 0.051 0.953 1.000 3,711.976 normal 0.000 0.794
incomeBetween_30-50K 0.058 0.950 −0.580 0.697 0.566 1.001 2,710.381 normal 0.000 14.803
incomeBelow_30K −0.209 0.950 −0.831 0.454 0.734 1.000 2,758.186 normal 0.000 15.100
model_parameters(fit3, ci = 0.95) |> gt() |> fmt_number(decimals = 3)
Parameter Coefficient SE CI CI_low CI_high t df_error p
(Intercept) 19.492 1.069 0.950 17.390 21.594 18.240 345.000 0.000
a1c_old −0.952 0.070 0.950 −1.090 −0.815 −13.608 345.000 0.000
age 0.023 0.014 0.950 −0.004 0.051 1.687 345.000 0.092
incomeBetween_30-50K 0.052 0.323 0.950 −0.585 0.688 0.160 345.000 0.873
incomeBelow_30K −0.214 0.334 0.950 −0.870 0.442 −0.641 345.000 0.522

Model Performance with fit1B?

model_performance(fit1B) |> gt() |> tab_options(table.font.size = 24)
ELPD ELPD_SE LOOIC LOOIC_SE WAIC R2 R2_adjusted RMSE Sigma
-792.0265 15.45717 1584.053 30.91434 1584.023 0.3699966 0.3672415 2.302677 2.312659
  • R2 = “unadjusted” Bayesian \(R^2\) (see r2_bayes for details)
  • R2_adjusted = leave-one-out cross-validation (LOO) adjusted \(R^2\), which is conceptually closer to our OLS adjusted \(R^2\) than some other options.
  • RMSE = root mean squared error (essentially the standard deviation of the residuals); lower values mean better fit.
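To make the RMSE definition concrete, here is a tiny hand-worked example with toy residuals (not from our models):

```r
res  <- c(1, -1, 2, -2)     # toy residuals
rmse <- sqrt(mean(res^2))   # sqrt((1 + 1 + 4 + 4) / 4) = sqrt(2.5)
round(rmse, 4)              # 1.5811
```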

Bayesian Model Performance

model_performance(fit2B) |> gt() |> tab_options(table.font.size = 24)
ELPD ELPD_SE LOOIC LOOIC_SE WAIC R2 R2_adjusted RMSE Sigma
-791.2411 15.42323 1582.482 30.84646 1582.476 0.3752328 0.3691855 2.292108 2.301049
  • Sigma = residual standard deviation (interpret in same way as RMSE)
  • WAIC = widely applicable information criterion. Lower WAIC values mean better fit.

Bayesian Model Performance

model_performance(fit3B) |> gt() |> tab_options(table.font.size = 24)
ELPD ELPD_SE LOOIC LOOIC_SE WAIC R2 R2_adjusted RMSE Sigma
-792.8554 15.42613 1585.711 30.85225 1585.686 0.3782747 0.3614431 2.289063 2.306763
  • LOOIC = leave-one-out cross-validation (LOO) information criterion. Lower LOOIC values mean better fit.
  • LOOIC_SE = standard error of LOOIC

See this link on “Performance of Bayesian Models” for still more options.

Performance within Training Sample?

compare_performance(fit1B, fit2B, fit3B, rank = TRUE) |> 
  gt() |> fmt_number(decimals = 3)
Name Model R2 R2_adjusted RMSE Sigma WAIC_wt LOOIC_wt Performance_Score
fit2B stanreg 0.375 0.369 2.292 2.301 0.602 0.781 0.901
fit3B stanreg 0.378 0.361 2.289 2.307 0.121 0.000 0.418
fit1B stanreg 0.370 0.367 2.303 2.313 0.277 0.219 0.226
compare_performance(fit1, fit2, fit3, rank = TRUE) |> 
  gt() |> fmt_number(decimals = 3)
Name Model R2 R2_adjusted RMSE Sigma AIC_wt AICc_wt BIC_wt Performance_Score
fit2 lm 0.377 0.374 2.292 2.302 0.568 0.568 0.211 0.831
fit3 lm 0.379 0.372 2.289 2.306 0.123 0.116 0.001 0.433
fit1 lm 0.371 0.370 2.303 2.309 0.308 0.316 0.788 0.265

Bayes Performance Indicators?

plot(compare_performance(fit1B, fit2B, fit3B))

Checking Bayesian fit1B

check_model(fit1B)

Checking Bayesian fit2B

check_model(fit2B)

Checking Bayesian fit3B

check_model(fit3B)

No augment from Bayes fits

The augment() function from broom doesn’t work with Bayesian fits from rstanarm.

We still want to get our predicted (fitted) values and residuals for each observation in our training sample.

aug_1B <- dm500_i_train |>
  mutate(.fitted = predict(fit1B),
         .resid = transa1c - predict(fit1B))
aug_2B <- dm500_i_train |>
  mutate(.fitted = predict(fit2B),
         .resid = transa1c - predict(fit2B))
aug_3B <- dm500_i_train |>
  mutate(.fitted = predict(fit3B),
         .resid = transa1c - predict(fit3B))
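The predict-then-subtract pattern above is generic: for any fit with a predict() method, it reproduces the usual residuals. A quick toy check with lm() (made-up data, just to illustrate):

```r
# Hypothetical toy data
set.seed(1)
d <- data.frame(x = 1:20)
d$y <- 3 + 2 * d$x + rnorm(20)

fit <- lm(y ~ x, data = d)
manual_resid <- d$y - predict(fit)   # same pattern as the Bayes workaround
all.equal(unname(manual_resid), unname(residuals(fit)))   # TRUE
```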

fit2 model (OLS vs Bayes)

aug2 |> select(1:8) |> head(3)
# A tibble: 3 × 8
    a1c a1c_old   age income          subject transa1c .fitted .resid
  <dbl>   <dbl> <dbl> <fct>           <chr>      <dbl>   <dbl>  <dbl>
1  11.6     5.6    54 Below_30K       S-168       8.62    15.4 -6.76 
2   6.5     6.4    68 Higher_than_50K S-359      15.4     15.0  0.425
3   7.6     7      60 Between_30-50K  S-421      13.2     14.2 -1.03 
aug_2B |> head(3)
# A tibble: 3 × 8
    a1c a1c_old   age income          subject transa1c .fitted .resid
  <dbl>   <dbl> <dbl> <fct>           <chr>      <dbl>   <dbl>  <dbl>
1  11.6     5.6    54 Below_30K       S-168       8.62    15.4 -6.76 
2   6.5     6.4    68 Higher_than_50K S-359      15.4     15.0  0.419
3   7.6     7      60 Between_30-50K  S-421      13.2     14.2 -1.04 

Extending to the Test Sample

Making Predictions: Test Sample

  • Create transformed outcome in test sample.
  • For OLS fits, apply augment() to new data = test sample.
dm500_i_test <- dm500_i_test |> mutate(transa1c = 100/a1c)

aug1_test <- augment(fit1, newdata = dm500_i_test) |> 
  mutate(mod = "fit1")
aug2_test <- augment(fit2, newdata = dm500_i_test) |> 
  mutate(mod = "fit2")
aug3_test <- augment(fit3, newdata = dm500_i_test) |> 
  mutate(mod = "fit3")

Results on next slide…

augment for each model

aug1_test |> head(2) |> gt()
a1c a1c_old age income subject transa1c .fitted .resid mod
8.7 10.7 47 Higher_than_50K S-003 11.49425 10.46152 1.032736 fit1
6.7 6.3 64 Between_30-50K S-005 14.92537 14.78184 0.143537 fit1
aug2_test |> head(2) |> gt()
a1c a1c_old age income subject transa1c .fitted .resid mod
8.7 10.7 47 Higher_than_50K S-003 11.49425 10.32330 1.17095515 fit2
6.7 6.3 64 Between_30-50K S-005 14.92537 14.95769 -0.03231506 fit2
aug3_test |> head(2) |> gt()
a1c a1c_old age income subject transa1c .fitted .resid mod
8.7 10.7 47 Higher_than_50K S-003 11.49425 10.40699 1.0872627 fit3
6.7 6.3 64 Between_30-50K S-005 14.92537 15.04821 -0.1228401 fit3

Combine Augmented Results

temp12 <- bind_rows(aug1_test, aug2_test)
test_res <- bind_rows(temp12, aug3_test) |>
  relocate(subject, mod, a1c, everything()) |>
  arrange(subject, mod)

test_res |> head() |> gt() |> tab_options(table.font.size = 24)
subject mod a1c a1c_old age income transa1c .fitted .resid
S-003 fit1 8.7 10.7 47 Higher_than_50K 11.49425 10.46152 1.03273565
S-003 fit2 8.7 10.7 47 Higher_than_50K 11.49425 10.32330 1.17095515
S-003 fit3 8.7 10.7 47 Higher_than_50K 11.49425 10.40699 1.08726268
S-005 fit1 6.7 6.3 64 Between_30-50K 14.92537 14.78184 0.14353701
S-005 fit2 6.7 6.3 64 Between_30-50K 14.92537 14.95769 -0.03231506
S-005 fit3 6.7 6.3 64 Between_30-50K 14.92537 15.04821 -0.12284005

Calculating Prediction Errors

For each of our (OLS) models, we have:

  • a1c, the outcome we actually care about
  • transa1c = 100/a1c, the transformed outcome
  • .fitted, a predicted transa1c from the model

What we want is

  • a1c_pred, the predicted a1c using this model, and
  • a1c_error, the error made in predicting a1c

Back-Transformation

Each .fitted value is a prediction of 100/a1c.

\[ \frac{100}{a1c} = \mbox{.fitted}, \mbox{ so } \frac{1}{a1c} = \frac{\mbox{.fitted}}{100}, \mbox{ and so}\\ a1c = \frac{100}{\mbox{.fitted}} \]

To get a1c_pred, then, we back-transform our .fitted values by computing 100/.fitted.
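For instance, a fitted value of 12.5 on the transa1c scale corresponds to a predicted a1c of 8:

```r
fitted   <- 12.5           # a predicted value of transa1c = 100/a1c
a1c_pred <- 100 / fitted   # back-transform: 8
# and the transformation round-trips:
100 / (100 / a1c_pred)     # 8
```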

Adding predicted A1c and error

  • We add a1c_pred, the predicted a1c using this model, and
  • a1c_error = error for this model in predicting a1c
test_res <- test_res |> 
  mutate(a1c_pred = 100/.fitted,
         a1c_error = a1c - a1c_pred)

test_res |> head(4) |> gt() 
subject mod a1c a1c_old age income transa1c .fitted .resid a1c_pred a1c_error
S-003 fit1 8.7 10.7 47 Higher_than_50K 11.49425 10.46152 1.032736 9.558843 -0.85884294
S-003 fit2 8.7 10.7 47 Higher_than_50K 11.49425 10.32330 1.170955 9.686827 -0.98682708
S-003 fit3 8.7 10.7 47 Higher_than_50K 11.49425 10.40699 1.087263 9.608926 -0.90892613
S-005 fit1 6.7 6.3 64 Between_30-50K 14.92537 14.78184 0.143537 6.765059 -0.06505944

Summarize Errors for Each Fit

We’ll look at four summary measures in 431…

  1. MAPE = mean absolute prediction error
  2. Max_APE = maximum absolute prediction error
  3. RMSPE = square root of mean squared prediction error
  4. Validated \(R^2\) = squared correlation of predictions and outcome in the new data

Measures 1-3 are all in the same units as a1c, our outcome.
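As a sketch, here are all four summaries computed on toy actual/predicted vectors (illustrative numbers only, not our data):

```r
actual <- c(8.7, 6.7, 8.4, 5.7)   # toy a1c values
pred   <- c(9.6, 6.8, 7.9, 6.2)   # toy predictions
err    <- actual - pred           # prediction errors

MAPE    <- mean(abs(err))         # mean absolute prediction error: 0.5
Max_APE <- max(abs(err))          # worst-case absolute error: 0.9
RMSPE   <- sqrt(mean(err^2))      # root mean squared prediction error
R2_val  <- cor(actual, pred)^2    # validated R-squared
```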

Building our Four Summaries

The test_res tibble contains, for each model,

  • a1c, the actual outcome value
  • a1c_pred, the predicted outcome value
  • a1c_error = a1c - a1c_pred = prediction error
test_summary <- test_res |> 
  group_by(mod) |>
  summarize(MAPE = mean(abs(a1c_error)),
            Max_APE = max(abs(a1c_error)),
            RMSPE = sqrt(mean(a1c_error^2)),
            R2_val = cor(a1c, a1c_pred)^2)

Table for Test-Sample Prediction

test_summary |> gt() |> 
  fmt_number(decimals = 4) |> tab_options(table.font.size = 30)
mod MAPE Max_APE RMSPE R2_val
fit1 1.0825 4.9541 1.5404 0.4350
fit2 1.0770 5.0353 1.5310 0.4430
fit3 1.0822 5.0564 1.5315 0.4409
  • Which model looks best by these metrics?
  • fit2 has the smallest MAPE and RMSPE, and the largest validated \(R^2\)
  • fit1 has the smallest Max_APE

What about a Bayesian fit?

Everything works as in OLS, except that we must work around the lack of augment() for our test data; predict() handles what we need, as below. Note that these predictions are still on the transa1c scale, so we would back-transform with 100/transa1c_pred before summarizing prediction errors in a1c.

test_1B <- dm500_i_test |>
  mutate(transa1c_pred = predict(fit1B, newdata = dm500_i_test),
         transa1c_error = transa1c - transa1c_pred)
test_1B |> head(4) |> gt()
a1c a1c_old age income subject transa1c transa1c_pred transa1c_error
8.7 10.7 47 Higher_than_50K S-003 11.49425 10.45906 1.0351920
6.7 6.3 64 Between_30-50K S-005 14.92537 14.78261 0.1427673
8.4 8.6 44 Between_30-50K S-014 11.90476 12.52257 -0.6178090
5.7 5.7 52 Between_30-50K S-015 17.54386 15.37218 2.1716795

Putting it all together

Comparing fit1, fit2, fit3

  • fit1 includes a1c_old, fit2 adds age, fit3 adds income
  • All three showed similar assumption problems (a hard-to-ignore issue with linearity, and perhaps some non-constant variance)
  • training sample: fit2 performed better than the others on adjusted \(R^2\), AIC and \(\sigma\), while fit1 was best on BIC, and fit3 was best on RMSE.
  • test sample: fit2 performed better on MAPE, RMSPE and validated \(R^2\), while fit1 had the smallest maximum APE.
  • Differences between models on most metrics were modest.

Let’s Choose Model fit2

Complete Case Analysis

  • Project B Study 2: Report the model building process, displaying conclusions from training sample (transformation decisions, fitted parameters with CIs and interpretations, comparison of performance metrics, checks of model assumptions) and from test sample (comparison of predictive error)

  • Of course, here we did some imputation, so we’d need to do this all over again to get these results.

Simple Imputation

  • Project B Study 2: Report the model building process, displaying conclusions from training sample (transformation decisions, fitted parameters with CIs and interpretations, comparison of performance metrics, checks of model assumptions) and from test sample (comparison of predictive error)

  • Sometimes, we do some of the steps above without reporting them to the public, of course. But that’s not the goal here.

Model fit2 using Single Imputation

Estimated using a single imputation in training sample.

n_obs(fit2)
[1] 350
model_parameters(fit2, ci = 0.95) |> gt() |> 
  fmt_number(decimals = 3) |> tab_options(table.font.size = 24)
Parameter Coefficient SE CI CI_low CI_high t df_error p
(Intercept) 19.423 1.019 0.950 17.420 21.427 19.064 347.000 0.000
a1c_old −0.958 0.070 0.950 −1.095 −0.822 −13.786 347.000 0.000
age 0.025 0.014 0.950 −0.002 0.052 1.791 347.000 0.074

Model fit2 using Single Imputation

glance(fit2) |> select(1:6) |> gt() |> 
  tab_options(table.font.size = 24)
r.squared adj.r.squared sigma statistic p.value df
0.3771109 0.3735207 2.301984 105.0407 2.139262e-36 2
glance(fit2) |> select(7:12) |> gt() |> 
  tab_options(table.font.size = 24)
logLik AIC BIC deviance df.residual nobs
-786.942 1581.884 1597.316 1838.799 347 350

Incorporating Multiple Imputations

  • Same as Project B Study 2: Report the model building process, displaying conclusions from training sample (transformation decisions, fitted parameters with CIs and interpretations, comparison of performance metrics, checks of model assumptions) and from test sample (comparison of predictive error)

  • New: Then add information on fitted parameters across the whole data set (not split into training and testing) after multiple (in this case, 10) imputations.

Using Ten Imputations

dm500_tenimps contained mice results across 10 imputations; earlier, we used only the 7th in our work. Let’s now build results for the model we’ve settled on, pooled across a fresh set of 10 imputations.

  • The original data (with missing values) are in dm500 - we need only to add our transformed outcome, 100/a1c.
dm500 <- dm500 |> mutate(transa1c = 100/a1c)

imp2_ests <- dm500 |>
  mice(m = 10, seed = 431, print = FALSE) |>
  with(lm(transa1c ~ a1c_old + age)) |>
  pool()
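pool() combines the per-imputation fits via Rubin’s rules. Here is a minimal base-R sketch of that arithmetic for a single coefficient, with made-up estimates and standard errors (m = 3 imputations, just for illustration):

```r
est <- c(1.0, 1.2, 0.9)       # hypothetical per-imputation estimates
se  <- c(0.2, 0.2, 0.2)       # hypothetical per-imputation standard errors
m   <- length(est)

q_bar <- mean(est)            # pooled point estimate
W     <- mean(se^2)           # within-imputation variance
B     <- var(est)             # between-imputation variance
T_var <- W + (1 + 1/m) * B    # total variance, per Rubin's rules
pooled_se <- sqrt(T_var)

round(c(q_bar, pooled_se), 4) # 1.0333 0.2667
```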

Estimates across 10 imputations

model_parameters(imp2_ests, ci = 0.95) |> gt() |> 
  fmt_number(decimals = 3) |> tab_options(table.font.size = 24)
Parameter Coefficient SE CI CI_low CI_high t df_error p
(Intercept) 19.933 0.848 0.950 18.266 21.600 23.495 485.520 0.000
a1c_old −0.998 0.060 0.950 −1.115 −0.881 −16.771 465.427 0.000
age 0.021 0.012 0.950 −0.002 0.043 1.786 482.672 0.075
glance(imp2_ests) |> gt() |> tab_options(table.font.size = 24)
nimp nobs r.squared adj.r.squared
10 500 0.3842468 0.3817684
n_obs(imp2_ests)
 [1] 500 500 500 500 500 500 500 500 500 500

What’s next?

  • Do most of this again, including some additional bells and whistles, and some new data.

  • Multiple imputation with Bayesian linear models is a topic for 432.

Session Information

xfun::session_info()
R version 4.4.1 (2024-06-14 ucrt)
Platform: x86_64-w64-mingw32/x64
Running under: Windows 11 x64 (build 22631)

Locale:
  LC_COLLATE=English_United States.utf8 
  LC_CTYPE=English_United States.utf8   
  LC_MONETARY=English_United States.utf8
  LC_NUMERIC=C                          
  LC_TIME=English_United States.utf8    

Package version:
  abind_1.4-8          askpass_1.2.1        backports_1.5.0     
  base64enc_0.1-3      bayesplot_1.11.1     bayestestR_0.15.0   
  BH_1.84.0.0          bigD_0.3.0           bit_4.5.0           
  bit64_4.5.2          bitops_1.0.9         blob_1.2.4          
  boot_1.3-31          broom_1.0.7          bslib_0.8.0         
  cachem_1.1.0         callr_3.7.6          car_3.1-3           
  carData_3.0-5        cellranger_1.1.0     checkmate_2.3.2     
  cli_3.6.3            clipr_0.8.0          coda_0.19-4.1       
  codetools_0.2-20     colorspace_2.1-1     colourpicker_1.3.0  
  commonmark_1.9.2     compiler_4.4.1       conflicted_1.2.0    
  correlation_0.8.6    cowplot_1.1.3        cpp11_0.5.0         
  crayon_1.5.3         crosstalk_1.2.1      curl_5.2.3          
  data.table_1.16.2    datasets_4.4.1       datawizard_0.13.0   
  DBI_1.2.3            dbplyr_2.5.0         Deriv_4.1.6         
  desc_1.4.3           digest_0.6.37        distributional_0.5.0
  doBy_4.6.24          dplyr_1.1.4          DT_0.33             
  dtplyr_1.3.1         dygraphs_1.1.1.6     easystats_0.7.3     
  effectsize_0.8.9     emmeans_1.10.5       estimability_1.5.1  
  evaluate_1.0.1       fansi_1.0.6          farver_2.1.2        
  fastmap_1.2.0        fontawesome_0.5.2    forcats_1.0.0       
  foreach_1.5.2        Formula_1.2-5        fs_1.6.5            
  gargle_1.5.2         generics_0.1.3       GGally_2.2.1        
  ggplot2_3.5.1        ggrepel_0.9.6        ggridges_0.5.6      
  ggstats_0.7.0        glmnet_4.1-8         glue_1.8.0          
  googledrive_2.1.1    googlesheets4_1.1.1  graphics_4.4.1      
  grDevices_4.4.1      grid_4.4.1           gridExtra_2.3       
  gt_0.11.1            gtable_0.3.6         gtools_3.9.5        
  haven_2.5.4          highr_0.11           hms_1.1.3           
  htmltools_0.5.8.1    htmlwidgets_1.6.4    httpuv_1.6.15       
  httr_1.4.7           ids_1.0.1            igraph_2.1.1        
  inline_0.3.19        insight_0.20.5       isoband_0.2.7       
  iterators_1.0.14     janitor_2.2.0        jomo_2.7-6          
  jquerylib_0.1.4      jsonlite_1.8.9       juicyjuice_0.1.0    
  knitr_1.49           labeling_0.4.3       later_1.3.2         
  lattice_0.22-6       lazyeval_0.2.2       lifecycle_1.0.4     
  lme4_1.1-35.5        loo_2.8.0            lubridate_1.9.3     
  magrittr_2.0.3       markdown_1.13        MASS_7.3-61         
  Matrix_1.7-0         MatrixModels_0.5.3   matrixStats_1.4.1   
  memoise_2.0.1        methods_4.4.1        mgcv_1.9-1          
  mice_3.16.0          microbenchmark_1.5.0 mime_0.12           
  miniUI_0.1.1.1       minqa_1.2.8          mitml_0.4-5         
  modelbased_0.8.9     modelr_0.1.11        multcomp_1.4-26     
  munsell_0.5.1        mvtnorm_1.3-1        naniar_1.1.0        
  nlme_3.1-164         nloptr_2.1.1         nnet_7.3-19         
  norm_1.0.11.1        numDeriv_2016.8.1.1  openssl_2.2.2       
  ordinal_2023.12.4.1  pan_1.9              parallel_4.4.1      
  parameters_0.23.0    patchwork_1.3.0      pbkrtest_0.5.3      
  performance_0.12.4   pillar_1.9.0         pkgbuild_1.4.5      
  pkgconfig_2.0.3      plyr_1.8.9           posterior_1.6.0     
  prettyunits_1.2.0    processx_3.8.4       progress_1.2.3      
  promises_1.3.0       ps_1.8.1             purrr_1.0.2         
  quantreg_5.99        QuickJSR_1.4.0       R6_2.5.1            
  ragg_1.3.3           rappdirs_0.3.3       RColorBrewer_1.1-3  
  Rcpp_1.0.13          RcppEigen_0.3.4.0.2  RcppParallel_5.1.9  
  reactable_0.4.4      reactR_0.6.1         readr_2.1.5         
  readxl_1.4.3         rematch_2.0.0        rematch2_2.1.2      
  report_0.5.9         reprex_2.1.1         reshape2_1.4.4      
  rlang_1.1.4          rmarkdown_2.29       rpart_4.1.23        
  rstan_2.32.6         rstanarm_2.32.1      rstantools_2.4.0    
  rstudioapi_0.17.1    rvest_1.0.4          sandwich_3.1-1      
  sass_0.4.9           scales_1.3.0         see_0.9.0           
  selectr_0.4.2        shape_1.4.6.1        shiny_1.9.1         
  shinyjs_2.1.0        shinystan_2.6.0      shinythemes_1.2.0   
  snakecase_0.11.1     sourcetools_0.1.7.1  SparseM_1.84.2      
  splines_4.4.1        StanHeaders_2.32.10  stats_4.4.1         
  stats4_4.4.1         stringi_1.8.4        stringr_1.5.1       
  survival_3.7-0       sys_3.4.3            systemfonts_1.1.0   
  tensorA_0.36.2.1     textshaping_0.4.0    TH.data_1.1-2       
  threejs_0.3.3        tibble_3.2.1         tidyr_1.3.1         
  tidyselect_1.2.1     tidyverse_2.0.0      timechange_0.3.0    
  tinytex_0.54         tools_4.4.1          tzdb_0.4.0          
  ucminf_1.2.2         UpSetR_1.4.0         utf8_1.2.4          
  utils_4.4.1          uuid_1.2.1           V8_6.0.0            
  vctrs_0.6.5          viridis_0.6.5        viridisLite_0.4.2   
  visdat_0.6.0         vroom_1.6.5          withr_3.0.2         
  xfun_0.48            xml2_1.3.6           xtable_1.8-4        
  xts_0.14.1           yaml_2.3.10          zoo_1.8-12